    Inverse Optimization of Convex Risk Functions

    The theory of convex risk functions is now well established as the basis for identifying the families of risk functions that should be used in risk-averse optimization problems. Despite its theoretical appeal, implementing a convex risk function remains difficult, as there is little guidance on how a convex risk function should be chosen so that it also represents one's own risk preferences well. In this paper, we address this issue through the lens of inverse optimization. Specifically, given solution data from some (forward) risk-averse optimization problems, we develop an inverse optimization framework that generates a risk function that renders the solutions optimal for the forward problems. The framework incorporates the well-known properties of convex risk functions, namely monotonicity, convexity, translation invariance, and law invariance, as general information about candidate risk functions, and also feedback from individuals, including an initial estimate of the risk function and pairwise comparisons among random losses, as more specific information. Our framework is particularly novel in that, unlike classical inverse optimization, no parametric assumption is made about the risk function, i.e., it is non-parametric. We show how the resulting inverse optimization problems can be reformulated as convex programs and are polynomially solvable if the corresponding forward problems are polynomially solvable. We illustrate the imputed risk functions in a portfolio selection problem and demonstrate their practical value using real-life data.
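
    The paper's framework is non-parametric and handles general law-invariant convex risk functions; the Python sketch below (using numpy and cvxpy) is only a much-simplified, hypothetical illustration of the inverse-optimization idea, in which the risk function is restricted to a scenario-weighted expectation and all data are synthetic. Given an observed optimal portfolio from a forward problem, a convex program recovers scenario weights, close to an initial estimate, under which that portfolio satisfies the KKT optimality conditions.

        # Simplified illustration (not the paper's non-parametric framework):
        # impute scenario weights q that render an observed portfolio optimal.
        # Requires numpy and cvxpy; all data below are synthetic assumptions.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n_scenarios, n_assets = 50, 5
        R = rng.normal(0.01, 0.05, size=(n_scenarios, n_assets))  # scenario returns

        # Forward problem: minimize the scenario-weighted loss E_q[-R w], which is a
        # monotone, convex, translation-invariant risk function once q is fixed.
        q_true = np.full(n_scenarios, 1.0 / n_scenarios)
        w = cp.Variable(n_assets)
        forward = cp.Problem(cp.Minimize(q_true @ (-R @ w)), [cp.sum(w) == 1, w >= 0])
        forward.solve()
        w_obs = w.value  # "observed" optimal solution, used as inverse-optimization data

        # Inverse problem: find weights q, close to an initial estimate q0, under which
        # w_obs satisfies the KKT conditions of the forward problem.
        q = cp.Variable(n_scenarios, nonneg=True)
        nu = cp.Variable()                        # multiplier for sum(w) == 1
        mu = cp.Variable(n_assets, nonneg=True)   # multipliers for w >= 0
        q0 = rng.dirichlet(np.ones(n_scenarios))  # initial estimate of the scenario weights
        support = np.flatnonzero(w_obs > 1e-6)    # complementary slackness: mu_i = 0 there
        constraints = [cp.sum(q) == 1,
                       -R.T @ q + nu * np.ones(n_assets) - mu == 0,  # stationarity
                       mu[support] == 0]
        inverse = cp.Problem(cp.Minimize(cp.sum_squares(q - q0)), constraints)
        inverse.solve()
        print("imputed scenario weights:", np.round(q.value, 3))

    The inverse step is itself a convex (quadratic) program, mirroring the paper's observation that the inverse problems can be reformulated as convex programs; the paper's actual construction additionally imposes the risk-function axioms and the individual feedback rather than a fixed scenario-expectation form.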

    Unsupervised Domain Adaptation on Reading Comprehension

    Reading comprehension (RC) has been studied on a variety of datasets, with performance boosted by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate this issue, we investigate unsupervised domain adaptation for RC, wherein a model is trained on a labeled source domain and applied to a target domain with only unlabeled samples. We first show that even with the powerful BERT contextual representation, performance is still unsatisfactory when a model trained on one dataset is directly applied to another target dataset. To solve this, we propose a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset, along with confidence filtering, to generate reliable pseudo-labeled samples in the target domain for self-training. In addition, it further reduces the domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show that our approach achieves comparable accuracy to supervised models on multiple large-scale benchmark datasets.

    Comment: 8 pages, 6 figures, 5 tables, accepted by AAAI 2020
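
    As a rough, hypothetical illustration of the two ingredients described above (not the authors' released code), the PyTorch sketch below shows confidence filtering of target-domain pseudo-labels and a conditional adversarial discriminator that acts on the outer product of features and predicted probabilities through a gradient-reversal layer. The feature dimension, confidence threshold, and discriminator architecture are assumptions for illustration only.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class GradReverse(torch.autograd.Function):
            """Identity in the forward pass; reverses (and scales) gradients in backward."""
            @staticmethod
            def forward(ctx, x, lambd):
                ctx.lambd = lambd
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad_output):
                return -ctx.lambd * grad_output, None

        def grad_reverse(x, lambd=1.0):
            return GradReverse.apply(x, lambd)

        def filter_pseudo_labels(probs, threshold=0.9):
            """Keep target samples whose top predicted probability exceeds the threshold."""
            conf, labels = probs.max(dim=-1)
            mask = conf > threshold
            return labels[mask], mask

        class ConditionalDomainDiscriminator(nn.Module):
            """Predicts the domain from the outer product of features and predictions."""
            def __init__(self, feat_dim, num_classes, hidden=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(feat_dim * num_classes, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1))

            def forward(self, features, probs, lambd=1.0):
                # Conditioning on predictions: outer product of probs [B, C] and features [B, D].
                joint = torch.bmm(probs.unsqueeze(2), features.unsqueeze(1))  # [B, C, D]
                joint = grad_reverse(joint.flatten(1), lambd)
                return self.net(joint).squeeze(-1)

        # Usage sketch with random tensors standing in for encoder features and predictions.
        B, D, C = 8, 768, 2
        disc = ConditionalDomainDiscriminator(D, C)
        src_feat, tgt_feat = torch.randn(B, D), torch.randn(B, D)
        src_prob = F.softmax(torch.randn(B, C), dim=-1)
        tgt_prob = F.softmax(torch.randn(B, C), dim=-1)

        domain_logits = torch.cat([disc(src_feat, src_prob), disc(tgt_feat, tgt_prob)])
        domain_labels = torch.cat([torch.zeros(B), torch.ones(B)])
        adv_loss = F.binary_cross_entropy_with_logits(domain_logits, domain_labels)

        pseudo_labels, keep = filter_pseudo_labels(tgt_prob)  # confident samples for self-training

    Minimizing the adversarial loss through the gradient-reversal layer pushes the encoder toward domain-invariant, prediction-conditioned features, while the filtered pseudo-labels supply target-domain training signal for the next self-training round.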